Neural Computing on a One Dimensional SIMD Array
Author
Abstract
Parallel processors offer a very attractive mechanism for implementing large neural networks. The main difficulties in applying parallel processing to neural computing are handling the large amount of global communication between processing units and storing the weights for each of the network's connections. This paper will discuss how massive parallelism, in the form of a one dimensional SIMD array, can handle indefinitely large networks in near real time by efficiently organizing the memory storage of weights and of input and output signals. Very little overhead time is spent communicating signals between processing units, and no unit is ever idle. An advantage of SIMD array systems is that the arithmetic processing is done bit serially, so trade-offs can easily be made between processor speed and the precision of the signals and weights.
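The abstract does not spell out the weight layout it uses, so the following is only a minimal illustrative sketch, not the paper's actual scheme: a serial C simulation of a one dimensional array of processing elements in which each PE keeps one output neuron's weight row in its local memory and the input vector is rotated one position per step. Under that (assumed) arrangement every PE performs a multiply-accumulate on every cycle and only nearest-neighbour communication is needed. All names and data are hypothetical.

```c
/* Hypothetical serial simulation of one layer evaluated on a 1-D SIMD array.
 * Each PE stores the weight row of one output neuron locally; the input
 * vector circulates one position per step, so all PEs stay busy and only
 * nearest-neighbour transfers occur. Illustrative only. */
#include <stdio.h>

#define N_IN 4   /* number of input signals           */
#define N_PE 4   /* number of processing elements     */

int main(void) {
    /* weight[p][j]: weight row held in PE p's local memory */
    int weight[N_PE][N_IN] = {
        {1, 0, 2, 1},
        {0, 1, 1, 2},
        {2, 2, 0, 1},
        {1, 1, 1, 0},
    };
    int input[N_IN] = {3, 1, 4, 2};
    int acc[N_PE]   = {0};

    /* One rotation step per cycle: PE p sees input element (p + t) mod N_IN */
    for (int t = 0; t < N_IN; t++) {
        for (int p = 0; p < N_PE; p++) {        /* all PEs act in lock-step  */
            int j = (p + t) % N_IN;             /* element currently at PE p */
            acc[p] += weight[p][j] * input[j];  /* local multiply-accumulate */
        }
    }

    for (int p = 0; p < N_PE; p++)
        printf("output[%d] = %d\n", p, acc[p]);
    return 0;
}
```

In the actual array the multiply-accumulate would be carried out bit serially, which is why word length (the precision of weights and signals) trades directly against evaluation speed, as the abstract notes.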
Related resources
Optimizing neural networks on SIMD parallel computers
Hopfield neural networks are often used to solve difficult combinatorial optimization problems. Multiple-restarts versions find better solutions but are slow on serial computers. Here, we study two parallel implementations on SIMD computers of multiple-restarts Hopfield networks for solving the maximum clique problem. The first one is a fine-grained implementation on the Kestrel Parallel Processor, a lin...
Cumulus - a Scalable Systolic Array for Binary Pattern Recognition and Networks of Associative Memories (LIX Technical Report)
Classification of binary patterns can be done highly efficiently in a bit-sequential way using asynchronous counters for the determination of the Hamming distance between two patterns. As each pattern has to be compared with a large number of database prototypes, this inherent SIMD parallelism can be easily exploited on an array of counters which are connected to an array of memory units containin...
Pure SIMD Processor Arrays with a Two-Dimensional Reconfigurable Network Do Not Support Virtual Parallelism
The support of virtual parallelism is important because it allows the complexity measurements of parallel algorithms to remain valid in those implementations in which the size of the processor array is smaller than the problem size. In this paper we demonstrate that pure SIMD RPAs, i.e. with no addressing autonomy, that allow tree-shaped two-dimensional buses to be established do not...
A Scalable Bit-Sequential SIMD Array for Nearest-Neighbor Classification using the City-Block
We present a fully scalable SIMD array architecture for a most efficient implementation of pattern classification by nearest-neighbor algorithms using the city-block metric. The elementary accumulator cell is highly optimized for a sequential accumulation of absolute integer differences, so that several hundreds of them can be easily integrated on a single chip. A two-dimensional M × N array structur... (a plain serial sketch of this kind of distance accumulation follows this list)
Mapping of neural networks onto the memory-processor integrated architecture
In this paper, an effective memory-processor integrated architecture, called memory-based processor array for artificial neural networks (MPAA), is proposed. The MPAA can be easily integrated into any host system via a memory interface. Specifically, the MPAA system provides an efficient mechanism for its local memory accesses, allowed on row and column bases using hybrid row and column decoding, ...
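Two of the entries above (the Cumulus systolic array and the bit-sequential city-block classifier) revolve around the same basic operation: accumulating a distance between a query pattern and many stored prototypes and then picking the closest one. The fragment below is a plain, serial C analogue of that operation on hypothetical toy data; it is not the counter hardware or the M × N accumulator array those reports describe.

```c
/* Serial nearest-prototype matching for illustration only: Hamming distance
 * for binary patterns and city-block (L1) distance for integer patterns. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define DIM    3
#define N_PROT 3

/* Hamming distance: count differing bits one at a time, mirroring a
 * bit-sequential comparison. */
static int hamming(uint32_t a, uint32_t b) {
    uint32_t x = a ^ b;
    int d = 0;
    while (x) { d += (int)(x & 1u); x >>= 1u; }
    return d;
}

/* City-block (L1) distance: accumulate absolute integer differences. */
static int cityblock(const int *a, const int *b, int dim) {
    int d = 0;
    for (int k = 0; k < dim; k++)
        d += abs(a[k] - b[k]);
    return d;
}

int main(void) {
    uint32_t bin_protos[N_PROT] = { 0x0F0Fu, 0x00FFu, 0xAAAAu };
    uint32_t bin_query = 0x0F0Bu;

    int int_protos[N_PROT][DIM] = { {0, 0, 0}, {5, 5, 5}, {2, 7, 1} };
    int int_query[DIM] = {1, 6, 2};

    int best_h = 0, best_c = 0;
    for (int i = 1; i < N_PROT; i++) {
        if (hamming(bin_query, bin_protos[i]) <
            hamming(bin_query, bin_protos[best_h]))
            best_h = i;
        if (cityblock(int_query, int_protos[i], DIM) <
            cityblock(int_query, int_protos[best_c], DIM))
            best_c = i;
    }
    printf("nearest by Hamming distance:    prototype %d\n", best_h);
    printf("nearest by city-block distance: prototype %d\n", best_c);
    return 0;
}
```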
Journal:
Volume / Issue:
Pages: -
Published: 1989